Elastic-net regularization in learning theory
Authors
C. De Mol, E. De Vito, L. Rosasco
Abstract
Within the framework of statistical learning theory we analyze in detail the so-called elastic-net regularization scheme proposed by Zou and Hastie [H. Zou, T. Hastie, Regularization and variable selection via the elastic net, J. R. Stat. Soc. Ser. B, 67(2) (2005) 301–320] for the selection of groups of correlated variables. To investigate the statistical properties of this scheme, and in particular its consistency properties, we set up a suitable mathematical framework. Our setting is random-design regression, where we allow the response variable to be vector-valued, and we consider prediction functions which are linear combinations of elements (features) in an infinite-dimensional dictionary. Under the assumption that the regression function admits a sparse representation on the dictionary, we prove that there exists a particular "elastic-net representation" of the regression function such that, as the number of data points increases, the elastic-net estimator is consistent not only for prediction but also for variable/feature selection. Our results include finite-sample bounds and an adaptive scheme to select the regularization parameter. Moreover, using convex analysis tools, we derive an iterative thresholding algorithm for computing the elastic-net solution which is different from the optimization procedure originally proposed in the above-cited work.
© 2009 Elsevier Inc. All rights reserved.
∗ Corresponding author at: Center for Biological and Computational Learning, Massachusetts Institute of Technology, 43 Vassar Street, Cambridge, MA 02139, United States.
E-mail addresses: [email protected] (C. De Mol), [email protected] (E. De Vito), [email protected] (L. Rosasco).
Journal of Complexity 25 (2009) 201–230. doi:10.1016/j.jco.2009.01.002
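The abstract mentions an iterative thresholding algorithm for computing the elastic-net solution. The following is a minimal sketch of such a scheme in the spirit of that description, not the authors' exact procedure: proximal gradient descent (ISTA) on the least-squares term plus the squared-l2 penalty, with componentwise soft-thresholding handling the l1 penalty. The function name `elastic_net_ista`, the penalty parameterization, and the step-size choice are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    # Componentwise soft-thresholding: the proximity operator of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def elastic_net_ista(X, y, lam1, lam2, n_iter=500):
    """Minimize (1/2n)||y - X b||^2 + lam1 ||b||_1 + lam2 ||b||_2^2
    by iterative soft-thresholding (a sketch, not the paper's exact algorithm)."""
    n, p = X.shape
    # Lipschitz constant of the gradient of the smooth part
    # (spectral norm squared of X, scaled, plus the ridge curvature).
    L = np.linalg.norm(X, 2) ** 2 / n + 2.0 * lam2
    tau = 1.0 / L  # step size guaranteeing convergence of ISTA
    b = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y) / n + 2.0 * lam2 * b
        b = soft_threshold(b - tau * grad, tau * lam1)
    return b
```

With lam2 = 0 this reduces to plain l1 (lasso) thresholding; the quadratic term makes the smooth part strongly convex, which stabilizes the selection among correlated features, the grouping effect that motivates the elastic net.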
Similar papers
TIMSS 2011 Student and Teacher Predictors for Mathematics Achievement Explored and Identified via Elastic Net
A substantial body of research has been conducted on variables relating to students' mathematics achievement with TIMSS. However, most studies have employed conventional statistical methods and have focused on a select few indicators instead of utilizing the hundreds of variables TIMSS provides. This study aimed to find a prediction model for students' mathematics achievement using as many TIMSS s...
Fast Learning Rate of Multiple Kernel Learning: Trade-Off between Sparsity and Smoothness
We investigate the learning rate of multiple kernel learning (MKL) with l1 and elastic-net regularizations. The elastic-net regularization is a composition of an l1-regularizer for inducing the sparsity and an l2-regularizer for controlling the smoothness. We focus on a sparse setting where the total number of kernels is large but the number of non-zero components of the ground truth is relative...
Regularization Strategies and Empirical Bayesian Learning for MKL
Multiple kernel learning (MKL), structured sparsity, and multi-task learning have recently received considerable attention. In this paper, we show how different MKL algorithms can be understood as applications of either regularization on the kernel weights or block-norm-based regularization, which is more common in structured sparsity and multi-task learning. We show that these two regularizati...
Semi-supervised Elastic Net for Pedestrian Counting
Pedestrian counting plays an important role in public safety and intelligent transportation. Most pedestrian counting algorithms based on supervised learning require much labeling work and rarely exploit the topological information of unlabelled data in a video. In this paper, we propose a Semi-Supervised Elastic Net (SSEN) regression method by utilizing sequential information between unlabelled ...
An Elastic Net Orthogonal Forward Regression Algorithm
In this paper we propose an efficient two-level model identification method for a large class of linear-in-the-parameters models from observational data. A new elastic net orthogonal forward regression (ENOFR) algorithm is employed at the lower level to carry out simultaneous model selection and elastic net parameter estimation. The two regularization parameters in the elastic net are optim...
Journal: J. Complexity
Volume: 25
Pages: 201–230
Publication year: 2009